Two problems are commonly encountered in analysis dictionary learning (ADL) algorithms. The first is that the original clean signals used to learn the dictionary are assumed to be known, whereas in practice they must be estimated from noisy measurements. This makes the optimization process computationally slow and the estimation potentially unreliable when the noise level is high, as exemplified by the Analysis K-SVD (AK-SVD) algorithm. The second is that a dictionary learning algorithm may return a trivial solution, such as the null dictionary matrix, as discussed in the learning overcomplete sparsifying transform (LOST) algorithm. Here we propose a novel optimization model and an iterative algorithm for learning the analysis dictionary, in which we directly employ the observed data to compute an approximate analysis sparse representation of the original signals (leading to a fast optimization procedure) and enforce an orthogonality constraint on the optimization criterion to avoid trivial solutions. Experiments demonstrate the competitive performance of the proposed algorithm compared with three baselines, namely the AK-SVD, LOST, and NAAOLA algorithms.
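To make the trivial-solution issue concrete, the following toy sketch (our own illustration under simplifying assumptions, not the paper's algorithm; the data, sparsity level, and square dictionary size are all hypothetical) alternates a hard-thresholding sparse-coding step on the observed data with an orthogonal-Procrustes dictionary update. Because each update projects the dictionary onto the set of matrices with orthonormal rows, the learned operator cannot collapse to the null matrix:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy synthetic data standing in for noisy observations (assumption: Gaussian
# data and a square dictionary, so full row-orthonormality is attainable).
n, m, N = 16, 16, 200          # signal dim, number of analysis atoms, samples
Y = rng.standard_normal((n, N))
Omega = rng.standard_normal((m, n))   # analysis dictionary (rows = atoms)

def hard_threshold(Z, k):
    """Keep the k largest-magnitude entries per column (approximate cosparse code)."""
    out = np.zeros_like(Z)
    idx = np.argsort(-np.abs(Z), axis=0)[:k]
    np.put_along_axis(out, idx, np.take_along_axis(Z, idx, axis=0), axis=0)
    return out

k = 4  # hypothetical number of nonzeros kept per column
for _ in range(20):
    # Sparse coding directly on the observed data (no clean-signal estimate).
    Z = hard_threshold(Omega @ Y, k)
    # Dictionary update: minimize ||Omega Y - Z||_F subject to orthonormal
    # rows; the orthogonal-Procrustes solution is U V^T from the SVD of Z Y^T.
    U, _, Vt = np.linalg.svd(Z @ Y.T, full_matrices=False)
    Omega = U @ Vt

# The constraint Omega Omega^T = I rules out the trivial Omega = 0.
print(np.allclose(Omega @ Omega.T, np.eye(m)))   # True
```

The orthogonality projection here is one generic way to enforce such a constraint; the paper's actual optimization criterion and update rules should be consulted for the method itself.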